Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

Neural Information Processing Systems

On the other hand, in the latter model, the Python interpreter embeds DL operations into a symbolic graph that represents the entire dataflow of a DNN. Thus, users must define their DL programs only with the existing symbolic operations that DL frameworks support.
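The symbolic model described above can be sketched as follows. This is an illustrative toy, not the API of any real DL framework: building the graph records operations without computing anything, and a separate run step executes the entire dataflow at once.

```python
# Minimal sketch of symbolic (graph) execution: operations are first
# recorded as graph nodes, then the whole dataflow graph is run at once.

class Node:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

def const(v):
    return Node("const", [v])

def add(a, b):
    return Node("add", [a, b])

def mul(a, b):
    return Node("mul", [a, b])

def run(node):
    """Execute the symbolic graph rooted at `node`."""
    if node.op == "const":
        return node.inputs[0]
    args = [run(n) for n in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    raise ValueError(node.op)

# Building the graph performs no arithmetic; `run` executes the whole
# dataflow in one step -- the essence of symbolic execution.
y = add(mul(const(2), const(3)), const(4))
print(run(y))  # 10
```

Because only pre-registered operations (`const`, `add`, `mul` here) can appear in the graph, arbitrary Python control flow cannot be captured, which is exactly the restriction the text notes.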



HT-HEDL: High-Throughput Hypothesis Evaluation in Description Logic

Algahtani, Eyad

arXiv.org Artificial Intelligence

We present High-Throughput Hypothesis Evaluation in Description Logic (HT-HEDL). HT-HEDL is a high-performance hypothesis evaluation engine that accelerates hypothesis evaluation computations for inductive logic programming (ILP) learners that use description logic (DL) for their knowledge representation; in particular, HT-HEDL targets accelerating computations for the $\mathcal{ALCQI}^{\mathcal{(D)}}$ DL language. HT-HEDL aggregates the computing power of multi-core CPUs and multiple GPUs to improve hypothesis computations at two levels: 1) the evaluation of a single hypothesis and 2) the evaluation of multiple hypotheses (i.e., a batch of hypotheses). At the first level, HT-HEDL uses a single GPU or a vectorized multi-threaded CPU to evaluate a single hypothesis. In vectorized multi-threaded CPU evaluation, classical (scalar) CPU multi-threading is combined with the CPU's extended vector instruction set to extract more CPU-based performance. The experimental results revealed that HT-HEDL increased CPU-based performance on a single hypothesis from a 20.4-fold speedup using classical multi-threading to a $\sim85$-fold speedup using vectorized multi-threading. In the GPU-based evaluation, HT-HEDL achieved speedups of up to $\sim38$-fold for single-hypothesis evaluation using a single GPU. To accelerate the evaluation of multiple hypotheses, HT-HEDL combines, in parallel, GPUs with multi-core CPUs to increase evaluation throughput (the number of evaluated hypotheses per second). The experimental results revealed that HT-HEDL increased evaluation throughput by up to 29.3-fold using two GPUs and by up to $\sim44$-fold using two GPUs combined with the CPU's vectorized multi-threaded evaluation.
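The general idea of batch hypothesis evaluation can be sketched in miniature. This is not HT-HEDL's implementation and the names are hypothetical: each hypothesis's coverage of the examples is encoded as a bitset, scoring reduces to bitwise operations (the kind of data-parallel work that vector instructions and GPUs accelerate at far larger scale), and a thread pool evaluates the batch in parallel.

```python
# Illustrative sketch (not HT-HEDL's actual code): batch hypothesis
# evaluation over bitset-encoded example coverage, parallelized with a
# thread pool. Bit i of a bitset means "example i is covered".
from concurrent.futures import ThreadPoolExecutor

def coverage_score(hypothesis_bits, positive_bits, negative_bits):
    """Toy score: covered positives minus covered negatives."""
    tp = bin(hypothesis_bits & positive_bits).count("1")
    fp = bin(hypothesis_bits & negative_bits).count("1")
    return tp - fp

def evaluate_batch(hypotheses, positives, negatives, workers=4):
    # One task per hypothesis: throughput scales with available cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(
            lambda h: coverage_score(h, positives, negatives), hypotheses))

# 8 examples: the first 4 are positive, the last 4 negative.
positives, negatives = 0b00001111, 0b11110000
scores = evaluate_batch([0b00000111, 0b11111111, 0b11110000],
                        positives, negatives)
print(scores)  # [3, 0, -4]
```

A real engine would replace the Python bit-counting with SIMD popcount instructions or GPU kernels, which is where the reported speedups come from.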


Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

#artificialintelligence

The rapid evolution of deep neural networks (DNNs) has been fueled by the support of deep learning (DL) frameworks such as TensorFlow and PyTorch. DL frameworks allow users to build and execute DNNs through Python programming. The standard execution model in DL frameworks is imperative execution: the Python interpreter executes a DL program just as it would a regular Python program. Let us go over a simple DL program to grasp the concept. Here, we assume that the condition the interpreter first evaluates is True.
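A simple program of the kind described above might look like the following. This is a hedged sketch with toy stand-ins for a real framework's eager tensors: under imperative execution, each operation runs immediately, so ordinary Python control flow (the `if` below) decides which operations execute, and we take the branch condition to be True as the text assumes.

```python
# Sketch of imperative execution: every DL operation runs eagerly, so a
# plain Python `if` selects which operations the interpreter executes.

def matmul_like(x, w):
    # Stand-in for a DL operation executed immediately (eagerly).
    return [xi * w for xi in x]

def forward(x, w, use_activation):
    y = matmul_like(x, w)
    if use_activation:  # ordinary Python condition, evaluated at run time
        y = [max(0.0, yi) for yi in y]  # ReLU-style activation
    return y

# The condition is True, so the activation branch actually executes.
print(forward([1.0, -2.0], 3.0, use_activation=True))  # [3.0, 0.0]
```

Under the symbolic model, by contrast, this `if` could not be recorded into the graph as-is; the branch would have to be expressed with the framework's own control-flow operations.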